Sky Publishing Corporation
The Stellar Magnitude System
By Alan MacRobert
Adapted from Sky & Telescope

MOST WAYS OF COUNTING and measuring things work logically. When the thing you're measuring increases, the number gets bigger. When you gain weight, the scale doesn't tell you a smaller number of kilograms or pounds. But things are not so sensible in astronomy, at least not when it comes to the brightnesses of stars.

Star magnitudes do count backward, the result of an ancient fluke that seemed like a good idea at the time. Ever since, the history of the magnitude scale has been, like so much else in astronomy, a history of increasing scientific precision built on an ungainly foundation too deeply rooted for anyone to bulldoze and start fresh.

The story begins around 129 B.C., when the Greek astronomer Hipparchus produced the first well-known star catalog. Hipparchus ranked his stars in a simple way. He called the brightest ones "of the first magnitude," simply meaning "the biggest." Stars not so bright he called "of the second magnitude," second biggest. The faintest stars he could see he called "of the sixth magnitude." This system was copied by Claudius Ptolemy in his own list of stars around A.D. 140. Sometimes Ptolemy added the words "greater" or "smaller" to distinguish between stars within a magnitude class. Ptolemy's works remained the basic astronomy texts for the next 1,400 years, so everyone used the system of first to sixth magnitudes. It worked just fine.

Galileo forced the first change. On turning his newly made telescopes to the sky, Galileo discovered that stars existed that were fainter than Ptolemy's sixth magnitude. "Indeed, with the glass you will detect below stars of the sixth magnitude such a crowd of others that escape natural sight that it is hardly believable," he exulted in his 1610 tract, Sidereus Nuncius. "The largest of these...we may designate as of the seventh magnitude...." Thus did a new term enter the astronomical language, and the magnitude scale became open-ended. Now there could be no turning back.

As telescopes got bigger and better, astronomers kept adding more magnitudes to the bottom of the scale. Today a pair of 50-millimeter binoculars will show stars of about 9th magnitude, a 6-inch amateur telescope will reach to 13th, and the Hubble Space Telescope has seen objects as faint as 30th magnitude.

By the middle of the 19th century astronomers realized there was a pressing need to define the entire magnitude scale, both telescopic and naked-eye, more precisely than by eyeball judgment. They had already determined that a 1st-magnitude star shines with about 100 times the light of a 6th-magnitude star. Accordingly, in 1856 the Oxford astronomer Norman R. Pogson proposed that a difference of five magnitudes be defined as a brightness ratio of exactly 100 to 1. This convenient rule was quickly adopted. One magnitude thus corresponds to a brightness difference of exactly the fifth root of 100, or very close to 2.512 -- a value known as the Pogson ratio.
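
To put Pogson's rule in concrete terms, here is a minimal sketch in Python (the function names are only illustrative) that turns a magnitude difference into a brightness ratio and back again:

    import math

    # Pogson's rule: 5 magnitudes correspond to a brightness ratio of exactly 100,
    # so 1 magnitude corresponds to a factor of 100 ** (1/5), about 2.512.
    def brightness_ratio(mag_difference):
        """Brightness ratio corresponding to a difference in magnitudes."""
        return 100 ** (mag_difference / 5)

    def magnitude_difference(ratio):
        """Difference in magnitudes corresponding to a brightness ratio."""
        return 2.5 * math.log10(ratio)

    print(brightness_ratio(1))        # ~2.512, the Pogson ratio
    print(brightness_ratio(5))        # 100.0
    print(magnitude_difference(100))  # 5.0

The same relation generates every entry in the table that follows.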

The Meaning of Magnitudes

    Difference in magnitude     Ratio in brightness
    0                           1 to 1
    0.1                         1.1 to 1
    0.2                         1.2 to 1
    0.3                         1.3 to 1
    0.4                         1.4 to 1
    0.5                         1.6 to 1
    0.6                         1.7 to 1
    0.7                         1.9 to 1
    0.8                         2.1 to 1
    0.9                         2.3 to 1
    1.0                         2.5 to 1
    1.5                         4.0 to 1
    2                           6.3 to 1
    2.5                         10 to 1
    3                           16 to 1
    4                           40 to 1
    5                           100 to 1
    6                           251 to 1
    7.5                         1,000 to 1
    10                          10,000 to 1
    15                          1,000,000 to 1
    20                          100,000,000 to 1

The resulting magnitude scale is logarithmic, in neat agreement with the 1850s belief that all human senses are logarithmic in their response to stimuli. (The decibel scale for rating loudness was likewise made logarithmic.) Alas, it's not quite so, not for brightness, sound, or anything else. Our perceptions of the world follow power-law curves, not logarithmic ones. Thus a star of magnitude 3.0 does not in fact look exactly halfway in brightness between 2.0 and 4.0. It looks a little fainter than that. The star that looks halfway between 2.0 and 4.0 will be about magnitude 2.8. The wider the magnitude gap, the greater this discrepancy. Accordingly, Sky & Telescope's computer-drawn sky maps use star dots that are sized according to a power-law relation (see the March 1990 issue, page 311).
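
To illustrate that last point, the sketch below assumes a Stevens-type power law for perceived brightness; the exponent of 0.5 is an assumption chosen for illustration, not a value given in this article:

    import math

    EXPONENT = 0.5  # assumed power-law exponent for perceived brightness

    def intensity(mag):
        """Relative physical intensity on the Pogson scale."""
        return 10 ** (-0.4 * mag)

    # Perceived brightnesses of magnitude-2.0 and magnitude-4.0 stars.
    s2 = intensity(2.0) ** EXPONENT
    s4 = intensity(4.0) ** EXPONENT

    # The magnitude of a star that *looks* halfway between them.
    s_mid = (s2 + s4) / 2
    m_mid = -2.5 * math.log10(s_mid ** (1 / EXPONENT))
    print(round(m_mid, 2))  # about 2.8, brighter than the arithmetic mean of 3.0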

But the scientific world in the 1850s was gaga for logarithms, so now they are locked into the magnitude system as firmly as Hipparchus's backward numbering.

Now that star magnitudes were ranked on a precise scale, however ill-fitting, another problem became unavoidable. Some "1st-magnitude" stars were a whole lot brighter than others. Astronomers had no choice but to extend the scale to brighter values as well as fainter ones. Thus Rigel, Capella, Arcturus, and Vega are magnitude 0 -- an awkward statement that might sound like they have no brightness at all. But it was too late to start over. The scale even extends into negative numbers: Sirius shines at magnitude -1.5, Venus reaches -4.4, the full Moon is about -12.5, and the Sun blazes at magnitude -26.7.
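
Negative and positive magnitudes plug into the same Pogson rule. A quick sketch, using only the magnitudes quoted above and the 30th-magnitude Hubble limit mentioned earlier:

    # How many times brighter one object is than another, given their magnitudes.
    def times_brighter(brighter_mag, fainter_mag):
        return 100 ** ((fainter_mag - brighter_mag) / 5)

    print(f"{times_brighter(-26.7, -1.5):.1e}")  # Sun vs. Sirius: about 1e10
    print(f"{times_brighter(-26.7, 30.0):.1e}")  # Sun vs. a 30th-magnitude object: about 5e22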

Other Colors, Other Magnitudes

By the late 19th century astronomers were using photography to record the sky and measure star brightnesses, and a new problem cropped up. Some stars having the same brightness to the eye showed different brightnesses on film, and vice versa. Compared to the eye, photographic emulsions were more sensitive to blue light and less so to red light.

Accordingly, two separate scales were devised. Visual magnitude, or m_vis, described how a star looked to the eye. Photographic magnitude, or m_pg, referred to star images on blue-sensitive black-and-white film. These are now abbreviated m_v and m_p.

This complication turned out to be a blessing in disguise. The difference between a star's photographic and visual magnitudes is a convenient measure of its color, and it was named the "color index." Its value is increasingly positive for yellow, orange, and red stars, and negative for blue ones.

But different photographic emulsions have different spectral responses! And people's eyes differ too. For one thing, your eye lenses turn yellow with age; old people see the world through yellow filters (S&T: September 1991, page 254). Magnitude systems designed for different wavelength ranges had to be more firmly grounded than this.

Today, precise magnitudes are specified by what a standard photoelectric photometer sees through standard color filters. Several photometric systems have been devised; the most familiar is called UBV after the three filters most commonly used. U encompasses the near-ultraviolet, B is blue, and V corresponds fairly closely to the old visual magnitude; its wide peak is in the yellow-green band, where the eye is most sensitive.

Color index is now defined as the B magnitude minus the V magnitude. A pure white star has a B-V of about 0.2, our yellow Sun is 0.63, orange-red Betelgeuse is 1.85, and the bluest stars believed possible, a pale blue-white, are about -0.4 (see "The Truth About Star Colors," S&T: September 1992, page 266).
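
A minimal sketch of the idea, with rough color labels chosen only to echo the examples above (the cutoffs are illustrative assumptions, not standard definitions):

    # Color index = B magnitude minus V magnitude; rough, illustrative labels.
    def rough_color(b_minus_v):
        if b_minus_v < 0.0:
            return "blue-white"
        if b_minus_v < 0.3:
            return "white"
        if b_minus_v < 0.6:
            return "yellow-white"
        if b_minus_v < 1.0:
            return "yellow"
        if b_minus_v < 1.5:
            return "orange"
        return "orange-red"

    print(rough_color(0.63))   # the Sun: yellow
    print(rough_color(1.85))   # Betelgeuse: orange-red
    print(rough_color(-0.4))   # the bluest stars: blue-white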

So successful was the UBV system that it was extended redward with R and I filters to define standard red and near-infrared magnitudes. Hence it is sometimes called UBVRI. Infrared astronomers have carried it to still longer wavelengths, picking up alphabetically after I to define the J, K, L, M, N, and Q bands (S&T: June 1995, page 23). These were chosen to match the wavelengths of infrared "windows" in the atmosphere where absorption by water vapor does not entirely block the view.

Appearance and Reality

What, then, is an object's real brightness? How much total energy is it sending to us at all wavelengths combined, visible and invisible?

The answer is called the bolometric magnitude, m_bol, because total radiation was once measured with a device called a bolometer. The bolometric magnitude has been called the God's-eye view of an object's true luster. Astrophysicists value it as the true measure of an object's energy emission as received at Earth. The bolometric correction tells how much brighter the bolometric magnitude is than the V magnitude. Its value is always negative, because any star or object emits at least some radiation outside the visual range.
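
In symbols, the correction is BC = m_bol - V, so m_bol = V + BC. A tiny sketch; the solar correction of roughly -0.1 is an approximate value assumed here, not taken from this article:

    # Bolometric correction BC = m_bol - V (always negative), so m_bol = V + BC.
    def apparent_bolometric(v_mag, bc):
        return v_mag + bc

    # Sun: V = -26.7 (quoted earlier); BC of about -0.1 is an assumed figure.
    print(apparent_bolometric(-26.7, -0.1))  # roughly -26.8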

Up to now we've been dealing only with apparent magnitudes -- how bright things look from Earth. We don't know how intrinsically bright an object is until we also take its distance into account. Thus astronomers created the absolute magnitude scale. An object's absolute magnitude is simply how bright it would appear if placed at a standard distance of 10 parsecs (32.6 light-years).

Seen from this distance, the Sun would shine at an unimpressive visual magnitude 4.85. Rigel would blaze at a dazzling -8, nearly as bright as the quarter Moon. The red dwarf Proxima Centauri, the closest star to the solar system, would appear to be magnitude 15.6, the tiniest little glimmer visible in a 16-inch telescope! Knowing absolute magnitudes makes plain how vastly diverse are the objects that we casually lump together under the single word "star."
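
These figures follow from the distance-modulus relation M = m - 5 log10(d / 10 pc). A minimal sketch; the apparent magnitude (about 11.1) and distance (about 1.3 parsecs) used for Proxima Centauri are approximate values not given in the article:

    import math

    # Absolute magnitude: the apparent magnitude an object would have
    # if it were placed at the standard distance of 10 parsecs.
    def absolute_magnitude(apparent_mag, distance_parsecs):
        return apparent_mag - 5 * math.log10(distance_parsecs / 10)

    AU_IN_PARSECS = 1 / 206265  # 1 astronomical unit expressed in parsecs

    print(round(absolute_magnitude(-26.7, AU_IN_PARSECS), 2))  # Sun: ~4.87
    print(round(absolute_magnitude(11.1, 1.3), 2))             # Proxima Centauri: ~15.5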

Absolute magnitudes are always written with a capital M, apparent magnitudes with a lower-case m. Any type of apparent magnitude -- photographic, bolometric, or whatever -- can be converted to absolute.

Lastly, for comets and asteroids a very different "absolute magnitude" is used. It tells how bright they would appear to an observer standing on the Sun if the object were one astronomical unit away.
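
As a hedged sketch of how such a value gets used for an asteroid (comets need an extra term for their changing activity, ignored here): if H is the absolute magnitude, r the object's distance from the Sun, and delta its distance from the observer, both in astronomical units, then ignoring phase-angle effects the apparent magnitude is roughly m = H + 5 log10(r * delta), which reduces to H when both distances are 1:

    import math

    # Apparent magnitude of an asteroid from its absolute magnitude H,
    # ignoring phase-angle effects. Distances are in astronomical units.
    def apparent_from_H(H, r_au, delta_au):
        return H + 5 * math.log10(r_au * delta_au)

    # Hypothetical object with H = 3.3.
    print(apparent_from_H(3.3, 1.0, 1.0))            # 3.3 -- the defining case
    print(round(apparent_from_H(3.3, 2.8, 2.8), 1))  # about 7.8 out near the asteroid belt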

So, are magnitudes too complicated? Not at all. They're as simple as they can be considering their historical roots and what they have to describe today. Hipparchus would be enchanted.

Alan MacRobert is an associate editor of Sky & Telescope magazine and an avid backyard astronomer.

© 2000 Sky Publishing Corp. All rights reserved.